
    A Formal Study of the Privacy Concerns in Biometric-Based Remote Authentication Schemes

    With their increasing use in cryptosystems, biometrics have attracted growing attention from the information security community. However, handling the associated privacy concerns remains a difficult problem. In this paper, we propose a novel security model that formalizes the privacy concerns in biometric-based remote authentication schemes. Our security model covers a number of practical privacy concerns, such as identity privacy and transaction anonymity, which have not been formally considered in the literature. In addition, we propose a general biometric-based remote authentication scheme and prove its security in our security model.

    An Application of the Boneh and Shacham Group Signature Scheme to Biometric Authentication

    We introduce a new way of generating strong keys from biometric data. Contrary to popular belief, this leads us to biometric keys that are easy to obtain and renew. Our solution is based on two-factor authentication: a low-cost card and a biometric trait are involved. Following the Boneh and Shacham group signature construction, we introduce a new biometric-based remote authentication scheme. Surprisingly, for ordinary use no interactions with a biometric database are needed in this scheme. As a side effect of our proposal, privacy of users is easily obtained, while it can be removed when necessary, for instance under legal warrant.
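
    The abstract does not spell out how the two factors are combined. As a rough illustration only, the sketch below derives a renewable key from a card-stored secret together with a noisy biometric bit-string using a toy code-offset (fuzzy-extractor-style) construction; the function names, repetition code, and parameters are assumptions for illustration, and the paper's actual Boneh-Shacham group-signature construction is not reproduced here.

```python
# Hypothetical two-factor key-derivation sketch: a card secret plus a noisy
# biometric bit-string, combined via a toy code-offset construction with a
# repetition code. This is NOT the paper's group-signature-based scheme; it
# only illustrates how a key can bind both factors and remain renewable.
import hashlib
import secrets

R = 5  # repetition factor: each secret bit is stored R times (assumed)

def enroll(biometric_bits, card_secret, k=16):
    """Return (helper, key). The helper is stored on the card and hides
    the fresh secret bits behind the biometric template."""
    s = [secrets.randbelow(2) for _ in range(k)]           # fresh secret bits
    codeword = [b for b in s for _ in range(R)]            # repetition encode
    assert len(biometric_bits) == k * R
    helper = [c ^ w for c, w in zip(codeword, biometric_bits)]
    key = hashlib.sha256(bytes(s) + card_secret).digest()  # bind both factors
    return helper, key

def authenticate(biometric_bits, card_secret, helper):
    """Recover the key from a fresh, slightly noisy biometric reading."""
    noisy = [h ^ w for h, w in zip(helper, biometric_bits)]
    s = []
    for i in range(0, len(noisy), R):                      # majority decode
        block = noisy[i:i + R]
        s.append(1 if sum(block) > R // 2 else 0)
    return hashlib.sha256(bytes(s) + card_secret).digest()

# Toy usage: a reading with a few flipped bits still yields the same key,
# and re-enrolling with a new card secret renews the key.
template = [secrets.randbelow(2) for _ in range(16 * R)]
card = secrets.token_bytes(16)
helper, key = enroll(template, card)
reading = template[:]
reading[3] ^= 1
reading[40] ^= 1
assert authenticate(reading, card, helper) == key
```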

    The Forward Physics Facility at the High-Luminosity LHC

    High energy collisions at the High-Luminosity Large Hadron Collider (LHC) produce a large number of particles along the beam collision axis, outside the acceptance of existing LHC experiments. The proposed Forward Physics Facility (FPF), to be located several hundred meters from the ATLAS interaction point and shielded by concrete and rock, will host a suite of experiments to probe standard model (SM) processes and search for physics beyond the standard model (BSM). In this report, we review the status of the civil engineering plans and the experiments to explore the diverse physics signals that can be uniquely probed in the forward region. FPF experiments will be sensitive to a broad range of BSM physics through searches for new particle scattering or decay signatures and deviations from SM expectations in high-statistics analyses with TeV neutrinos in this low-background environment. High-statistics neutrino detection will also provide valuable data for fundamental topics in perturbative and non-perturbative QCD and in weak interactions. Experiments at the FPF will make it possible to exploit synergies between forward particle production at the LHC and astroparticle physics. We report here on these physics topics, on infrastructure, detector, and simulation studies, and on future directions to realize the FPF's physics potential.

    Evaluation of Topological Vulnerability of the Internet under Regional Failures

    Natural disasters often lead to regional failures, which can bring down network nodes and links co-located in a large geographical area. Assessing a network's vulnerability under regional failures is therefore beneficial for improving its resilience. In this paper, we propose the concept of α-critical-distance to evaluate the importance of a network node in geographical space for a given failure impact ratio α. We present a theoretical analysis and a polynomial-time algorithm for finding the minimal α-critical-distance of a network. Using real Internet topology data, we conduct experiments to compute the minimal α-critical-distances of different networks. The computational results demonstrate the differences in vulnerability across networks. We also find that, for the same impact ratio α, the studied topologies have smaller α-critical-distances when network performance is measured by network efficiency than when it is measured by giant component size.
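
    The abstract gives neither the formal definition nor the paper's polynomial-time algorithm. As a rough reading of the concept, the sketch below sweeps a failure radius around a node and reports the smallest tested radius whose regional failure reaches the impact ratio α, using giant component size as the performance metric; the definition details, the metric, and all names are assumptions.

```python
# Hypothetical sketch of the alpha-critical-distance idea: for a node v, find
# the smallest radius d such that failing every node within geographic
# distance d of v degrades a performance metric (here, giant component size)
# by at least a fraction alpha. The paper's exact definition and its
# polynomial-time algorithm may differ from this brute-force radius sweep.
import math
import networkx as nx

def giant_component_size(g):
    return max((len(c) for c in nx.connected_components(g)), default=0)

def alpha_critical_distance(g, pos, v, alpha, step=10.0, max_d=1000.0):
    """pos maps node -> (x, y) coordinates; returns the smallest tested
    radius whose regional failure around v meets the impact ratio alpha."""
    baseline = giant_component_size(g)
    d = step
    while d <= max_d:
        failed = {u for u in g if math.dist(pos[u], pos[v]) <= d}
        damaged = g.copy()
        damaged.remove_nodes_from(failed)
        impact = 1.0 - giant_component_size(damaged) / baseline
        if impact >= alpha:
            return d
        d += step
    return None  # no tested radius reaches the requested impact ratio

# The minimal alpha-critical-distance of the whole network is then the
# minimum over all nodes, e.g.:
#   min(alpha_critical_distance(g, pos, v, alpha=0.3) for v in g)
```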

    Reframing Book Publishing in the Age of Networking

    This paper presents preliminary results of an on-going examination of book publishing practices emerging through the complex interaction of technological, economic, and socio-cultural factors in the networking environment of the Internet. The theoretical framework guiding the study is a diachronic definition of book publishing proposed by Thomas R. Adams and Nicolas Barker in "A New Model for the Study of the Book," first published in A Potency of Life: Books in Society by the British Library in 1993. The Adams/Barker definition of publishing focuses on "the initial decision to multiply a text or image for distribution." In this paper, we propose what we intend as a friendly amendment to their definition: for our purposes, a book publisher is an individual or a collective that makes the initial decisions and arrangements for multiple copies of books to be publicly available for distribution. The methodology for this work was to study a purposive sample of book publishers found on the Internet that fit our definitional framework. Our final sample, which we call emerging publishers, comprises just under 300 publishers. This sample was divided into three categories: Category I: Book Publishing Entities; Category II: Author as Publisher; and Category III: Channels to Market. Each category is sub-divided, defined, and described. Tables are included that show the publishers in each category. The paper concludes with observations across categories about format; shifts in publishers' roles; standards of publishing practice; costs; discovery, reception, and reading; and survival.

    Responsive Round Complexity and Concurrent Zero-Knowledge

    The number of communication rounds is a classic complexity measure for protocols, and reducing round complexity is a major goal in protocol design. However, when communication time is not constant, and in particular when one of the parties intentionally delays its messages, the round complexity measure may become meaningless. For example, if one round takes longer than the rest of the protocol, it does not matter whether the round complexity is bounded by a constant or by a polynomial. In this paper, we propose a complexity measure called responsive round complexity. Loosely speaking, a protocol has responsive round complexity m with respect to party A if it makes the following guarantee: if A's longest delay in responding to a message in a run of the protocol is t, then, in that run, the overall communication time is at most m · t. The logic behind this definition is that if a party responds quickly to messages, whether because it has a good connection or because it simply chooses not to delay, then this party deserves an overall quicker running time. Responsive round complexity is particularly interesting in a setting where a party may gain something by delaying its messages; in this case, the delaying party does not deserve the same response time as another party that behaves nicely. We demonstrate the significance of responsive round complexity by presenting a new protocol for concurrent zero-knowledge. The new protocol is a black-box concurrent zero-knowledge proof for all languages in NP with round complexity Õ(log² n) but responsive round complexity Õ(log n). While the round complexity of the new protocol is similar to what is known from previous works, its responsive round complexity is a significant improvement: all known concurrent zero-knowledge protocols require Õ(log² n) rounds. Furthermore, in light of the known lower bounds, the responsive round complexity of this protocol is essentially optimal. Keywords: zero-knowledge, concurrent zero-knowledge, cryptographic protocols.
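
    As a small illustration of the guarantee just described, the toy checker below takes a run, modelled here as an ordered list of (sender, timestamp) message events (an assumed format, not one from the paper), computes party A's longest response delay t, and verifies that the run's total communication time is at most m · t.

```python
# Hypothetical checker for the responsive-round-complexity guarantee: a run
# is a list of (party, timestamp) events for the messages sent, in order,
# and party "A" is taken to respond to the preceding message it received.
# The trace format and names are illustrative assumptions only.
def max_response_delay(run, party="A"):
    """Longest time `party` took to answer the message it just received."""
    delay, prev_time = 0.0, 0.0
    for sender, t in run:
        if sender == party:
            delay = max(delay, t - prev_time)
        prev_time = t
    return delay

def satisfies_responsive_bound(run, m, party="A"):
    """Check the guarantee: total communication time <= m * t, where t is
    `party`'s longest response delay in this run."""
    total_time = run[-1][1] - run[0][1] if run else 0.0
    return total_time <= m * max_response_delay(run, party)

# Example: A always answers within 1 time unit, so a responsive bound of
# m = 6 holds for this 5-unit run even though B stalls for 2 units.
run = [("B", 0.0), ("A", 1.0), ("B", 3.0), ("A", 4.0), ("B", 5.0)]
assert satisfies_responsive_bound(run, m=6)
```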

    On the Feasibility of Consistent Computations

    In many practical settings, participants are willing to deviate from the protocol only if they remain undetected. Aumann and Lindell introduced the concept of covert adversaries to formalize this type of corruption. In this paper, we refine their model to obtain stronger security guarantees. Namely, we show how to construct protocols where malicious participants cannot learn anything beyond their intended outputs and honest participants can detect malicious behavior that alters their outputs. As this construction does not protect honest parties from selective protocol failures, a valid corruption complaint can leak a single bit of information about the inputs of honest parties. Importantly, it is often up to the honest party to decide whether to complain or not. This potential leakage is often compensated by gains in efficiency: many standard zero-knowledge proof steps can be omitted. As a concrete practical contribution, we show how to implement consistent versions of several important cryptographic protocols, such as oblivious transfer, conditional disclosure of secrets, and private inference control.
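
    The covert-adversary model the abstract builds on rests on a deterrence argument: cheating may occasionally succeed, but it is caught with noticeable probability. The toy simulation below illustrates that idea with a plain cut-and-choose audit, which is a standard device in this literature rather than the paper's own protocols; all names and parameters are illustrative.

```python
# Toy simulation of covert-adversary deterrence via cut-and-choose auditing
# (background for the model cited above, not the paper's constructions).
# A cheater who corrupts one of k prepared instances is caught whenever the
# audit opens that instance, i.e. with probability (k - 1) / k.
import random

def run_cut_and_choose(k, cheat):
    instances = ["ok"] * k
    if cheat:
        instances[random.randrange(k)] = "corrupted"  # deviate on one copy
    keep = random.randrange(k)                        # copy used unopened
    audited = [inst for i, inst in enumerate(instances) if i != keep]
    return "corrupted" in audited                     # caught?

trials = 100_000
caught = sum(run_cut_and_choose(k=10, cheat=True) for _ in range(trials))
print(f"empirical detection rate: {caught / trials:.3f}  (expected 0.900)")
```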

    Efficient Tree-Based Revocation in Groups of Low-State Devices

    We study the problem of broadcasting confidential information to a collection of n devices while providing the ability to revoke an arbitrary subset of those devices (and tolerating collusion among the revoked devices). In this paper, we restrict our attention to low-memory devices, that is, devices that can store at most O(log n) keys. We consider solutions for both the zero-state and low-state cases, where such devices are organized in a tree structure T. We allow the group controller to encrypt broadcasts to any subtree of T, even if the tree is based on a multi-way organizational chart or is a severely unbalanced multicast tree.
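
    For orientation, the sketch below shows the classical complete-subtree flavour of tree-based revocation that this line of work refines: each device is a leaf of a binary key tree, stores the O(log n) keys on its root-to-leaf path, and a broadcast excluding a revoked set is encrypted once under each key in a cover of untouched subtrees. This is background illustration under assumed heap-style node numbering, not the paper's scheme for multi-way or unbalanced trees.

```python
# Minimal sketch of a complete-subtree-style revocation cover over a binary
# key tree. Each of n = 2**depth devices is a leaf and stores the keys on
# its root-to-leaf path, i.e. O(log n) keys.

def path_to_root(leaf, depth):
    """Nodes are numbered heap-style: root = 1, children of v are 2v, 2v+1.
    Leaves are nodes 2**depth .. 2**(depth + 1) - 1."""
    node, path = leaf, []
    while node >= 1:
        path.append(node)
        node //= 2
    return path  # the O(log n) key identifiers this device holds

def cover(revoked_leaves, depth):
    """Roots of the maximal subtrees containing no revoked leaf.
    Broadcasts are encrypted once under each cover node's key."""
    if not revoked_leaves:
        return [1]                     # nothing revoked: use the root key
    dirty = set()                      # nodes with a revoked leaf below them
    for leaf in revoked_leaves:
        dirty.update(path_to_root(leaf, depth))
    cover_nodes = []
    for v in dirty:
        for child in (2 * v, 2 * v + 1):
            if child < 2 ** (depth + 1) and child not in dirty:
                cover_nodes.append(child)
    return sorted(cover_nodes)

# Example: 8 devices (leaves 8..15), revoke leaves 9 and 12. Every
# non-revoked leaf shares a key with exactly one cover node, while the
# revoked leaves hold none of the cover keys.
print(cover({9, 12}, depth=3))   # -> [5, 7, 8, 13]
```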